
    Impact of Annotation Difficulty on Automatically Detecting Problem Localization of Peer-Review Feedback

    We believe that providing assessment on students' reviewing performance will enable students to improve the quality of their peer reviews. We focus on assessing one particular aspect of the textual feedback contained in a peer review – the presence or absence of problem localization; feedback containing problem localization has been shown to be associated with increased understanding and implementation of the feedback. While in prior work we demonstrated the feasibility of learning to predict problem localization using linguistic features automatically extracted from textual feedback, we hypothesize that inter-annotator disagreement on labeling problem localization might impact both the accuracy and the content of the predictive models. To test this hypothesis, we compare the use of feedback examples where problem localization is labeled with differing levels of annotator agreement, for both training and testing our models. Our results show that when models are trained and tested using only feedback where annotators agree on problem localization, the models both perform with high accuracy and contain rules involving just two simple linguistic features. In contrast, when training and testing using feedback examples where annotators both agree and disagree, model performance drops slightly, but the learned rules capture more subtle patterns of problem localization. Keywords: problem localization in text comments, data mining of peer reviews, inter-annotator agreement, natural language processing
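    The comparison the abstract describes is easy to sketch. Below is a minimal illustration (ours, not the authors' code) of training rule-style classifiers on agreement-filtered versus unfiltered feedback, with a shallow decision tree standing in for the paper's rule learner; the corpus variables and feature extraction are hypothetical placeholders.

        # Minimal sketch: compare models trained on agreement-filtered vs. all data.
        from sklearn.tree import DecisionTreeClassifier, export_text
        from sklearn.model_selection import cross_val_score

        def evaluate(examples):
            """examples: list of (feature_vector, label) pairs, where the features
            are simple linguistic cues extracted from the feedback text."""
            X = [feats for feats, _ in examples]
            y = [label for _, label in examples]
            clf = DecisionTreeClassifier(max_depth=3)  # shallow tree ~ a few rules
            acc = cross_val_score(clf, X, y, cv=5).mean()
            clf.fit(X, y)
            return acc, export_text(clf)  # accuracy plus human-readable rules

        # agreed = [ex for ex in corpus if has_full_agreement(ex)]  # hypothetical filter
        # acc_agreed, rules_agreed = evaluate(agreed)  # expect: higher acc, simpler rules
        # acc_all, rules_all = evaluate(corpus)        # expect: slightly lower accuracy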

    Incentivizing High Quality Crowdwork

    We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. Finally, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets. Comment: This is a preprint of an article accepted for publication in WWW, © 2015 International World Wide Web Conference Committee.
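    The extended principal-agent idea can be illustrated numerically. The following is our own minimal sketch, not the paper's formal model: a worker picks the effort level that maximizes expected pay, where the base payment enters through a subjective belief about rejection and the bonus through a quality threshold. All functional forms and numbers are hypothetical.

        # Sketch of a worker's effort choice under subjective acceptance beliefs.
        import numpy as np

        efforts = np.linspace(0.0, 1.0, 101)
        base_pay, bonus = 0.50, 0.50                  # dollars; illustrative values
        p_accept = 1 - 0.3 * np.exp(-5 * efforts)     # belief: low effort risks rejection
        p_bonus = efforts ** 2                        # bonus only for high quality
        cost = 0.6 * efforts ** 2                     # disutility of effort

        # A task is "effort-responsive" here because p_bonus rises with effort.
        utility = p_accept * base_pay + p_bonus * bonus - cost
        best = efforts[np.argmax(utility)]
        print(f"utility-maximizing effort: {best:.2f}")
        # Setting bonus = 0 lowers the maximizing effort, but p_accept still rewards
        # some effort: the sense in which even base payments are implicitly
        # performance-based.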

    Panel III: Politics and the Public in IP & Info Law Policy Making

    We have been moving gradually from the theoretical to the practical. Having examined the impact of critical legal studies (CLS) in the academy and having discussed the intersection between scholarship and activism, we now turn to the nitty-gritty questions of how to actually enact change in intellectual property and information law and policy.

    Long-term, Real-world Safety of Adalimumab in Rheumatoid Arthritis: Analysis of a Prospective US-Based Registry

    OBJECTIVE: To assess long-term safety in a US cohort of rheumatoid arthritis (RA) patients treated with adalimumab in real-world clinical care settings. METHODS: This observational study analyzed the long-term incidence of safety outcomes among RA patients initiating adalimumab, using data from the Corrona RA registry. Patients were adults (≥18 years) who initiated adalimumab treatment between January 2008 and June 2017 and who had at least 1 follow-up visit. RESULTS: In total, 2798 adalimumab initiators were available for analysis, with a mean age of 54.5 years, 77% female, and a mean duration of disease of 8.3 years. Nearly half (48%) were biologic naive, and 9% were using prednisone ≥10 mg at adalimumab initiation. The incidence rates per 100 person-years for serious infections, congestive heart failure requiring hospitalization, malignancy (excluding nonmelanoma skin cancer), and all-cause mortality were 1.86, 0.15, 0.64, and 0.33, respectively. The incidence of serious infections was higher in the first year of therapy (3.44 [95% confidence interval: 2.45-4.84]) than in subsequent years, while other measured adverse events (AEs) did not vary substantially by duration of exposure. The median time to adalimumab discontinuation was 11 months, while the median time to first serious infection among those experiencing a serious infection event was 12 months. CONCLUSION: Analysis of long-term data from this prospective real-world registry demonstrated a safety profile consistent with previous studies in patients with RA. This analysis did not identify any new safety signals associated with adalimumab treatment and provides valuable guidance for physicians prescribing adalimumab for extended periods of time.
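    For readers working with rates of this kind, an incidence rate per 100 person-years and its exact Poisson confidence interval can be computed as in the sketch below. This is our illustration with hypothetical counts (chosen so the point rate lands near the reported 3.44); the registry's actual interval method may differ.

        # Incidence rate per 100 person-years with an exact Poisson CI.
        from scipy.stats import chi2

        def incidence_rate(events, person_years, alpha=0.05):
            rate = 100 * events / person_years
            # Exact Poisson limits via the chi-square quantile relationship:
            lo = 100 * chi2.ppf(alpha / 2, 2 * events) / (2 * person_years)
            hi = 100 * chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
            return rate, (lo, hi)

        # Hypothetical: 46 serious infections over 1337 person-years of exposure.
        print(incidence_rate(46, 1337))  # rate ~3.44 per 100 person-years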

    Estimation of interdomain flexibility of N-terminus of factor H using residual dipolar couplings

    Characterization of segmental flexibility is needed to understand the biological mechanisms of the very large category of functionally diverse proteins, exemplified by the regulators of complement activation, that consist of numerous compact modules or domains linked by short, potentially flexible, sequences of amino acid residues. The use of NMR-derived residual dipolar couplings (RDCs), in magnetically aligned media, to evaluate interdomain motion is established but only for two-domain proteins. We focused on the three N-terminal domains (called CCPs or SCRs) of the important complement regulator, human factor H (i.e. FH1-3). These domains cooperate to facilitate cleavage of the key complement activation-specific protein fragment, C3b, forming iC3b that no longer participates in the complement cascade. We refined a three-dimensional solution structure of recombinant FH1-3 based on nuclear Overhauser effects and RDCs. We then employed a rudimentary series of RDC datasets, collected in media containing magnetically aligned bicelles (disk-like particles formed from phospholipids) under three different conditions, to estimate interdomain motions. This circumvents a requirement of previous approaches for technically difficult collection of five independent RDC datasets. More than 80% of conformers of this predominantly extended three-domain molecule exhibit flexions of < 40°. Such segmental flexibility (together with the local dynamics of the hypervariable loop within domain 3) could facilitate recognition of C3b via initial anchoring and eventual reorganization of modules to the conformation captured in the previously solved crystal structure of a C3b:FH1-4 complex.
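    For context, the standard single-alignment RDC equation from the NMR literature (general formalism, not a result specific to this study) relates a measured coupling to the polar angles (θ, φ) of the internuclear vector in the alignment frame, with axial component D_a and rhombicity R:

        D(\theta,\phi) = D_a \left[ \left( 3\cos^2\theta - 1 \right) + \tfrac{3}{2}\, R \, \sin^2\theta \cos 2\phi \right]

    Because each rigid domain reports its own effective alignment tensor, differences between the tensors fitted to adjacent domains are what carry the interdomain-flexibility information exploited here.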

    Conversation acts in task-oriented spoken dialogue

    A linguistic form's compositional, timeless meaning can be surrounded or even contradicted by various social, aesthetic, or analogistic companion meanings. This paper addresses a series of problems in the structure of spoken language discourse, including turn-taking and grounding. It views these processes as composed of fine-grained actions, which resemble speech acts both in resulting from a computational mechanism of planning and in having a rich relationship to the specific linguistic features which serve to indicate their presence. The resulting notion of Conversation Acts is more general than speech act theory, encompassing not only the traditional speech acts but turn-taking, grounding, and higher-level argumentation acts as well. Furthermore, the traditional speech acts in this scheme become fully joint actions, whose successful performance requires full listener participation. This paper presents a detailed analysis of spoken language dialogue. It shows the role of each class of conversation acts in discourse structure, and discusses how members of each class can be recognized in conversation. Conversation acts, it will be seen, better account for the success of conversation than speech act theory alone.
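    The multi-level taxonomy the abstract describes can be summarized as a small data structure. This is our own illustrative sketch of the four classes of acts named above, not the paper's notation; all names are hypothetical.

        # Sketch of the conversation-act levels as a simple type hierarchy.
        from dataclasses import dataclass
        from enum import Enum

        class ActLevel(Enum):
            TURN_TAKING = "turn-taking"           # e.g. take-turn, keep-turn, release-turn
            GROUNDING = "grounding"               # e.g. initiate, acknowledge, repair
            CORE_SPEECH_ACT = "core speech act"   # traditional acts: inform, request, ...
            ARGUMENTATION = "argumentation"       # higher-level acts spanning utterances

        @dataclass
        class ConversationAct:
            level: ActLevel
            label: str
            speaker: str
            # Core speech acts are joint actions: success requires listener uptake.
            grounded_by_listener: bool = False

        act = ConversationAct(ActLevel.CORE_SPEECH_ACT, "request", speaker="A")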

    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners' interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved on-line instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project, which has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues such as reliability, validity, and efficiency that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important piece of the work towards making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
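    A compact sketch of the kind of pipeline the article describes, using off-the-shelf components rather than TagHelper itself: shallow n-gram pattern features feed a classifier, and agreement between automatic and human coding is checked with a chance-corrected statistic such as Cohen's kappa. Corpus contents and labels are hypothetical.

        # Sketch: code utterances automatically, then check reliability vs. humans.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.metrics import cohen_kappa_score
        from sklearn.model_selection import train_test_split

        def train_and_check(texts, codes):
            """texts: utterance strings; codes: human labels from the coding scheme."""
            X_tr, X_te, y_tr, y_te = train_test_split(texts, codes, test_size=0.3)
            vec = CountVectorizer(ngram_range=(1, 2))  # unigram/bigram pattern features
            clf = MultinomialNB().fit(vec.fit_transform(X_tr), y_tr)
            pred = clf.predict(vec.transform(X_te))
            # Chance-corrected agreement between automatic and human coding:
            return cohen_kappa_score(y_te, pred)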

    SHREC 2011: robust feature detection and description benchmark

    Feature-based approaches have recently become very popular in computer vision and image analysis applications, and are becoming a promising direction in shape retrieval. The SHREC'11 robust feature detection and description benchmark simulates the feature detection and description stages of feature-based shape retrieval algorithms. The benchmark tests the performance of shape feature detectors and descriptors under a wide variety of transformations, allowing evaluation of how well algorithms cope with particular classes of transformations and of the transformation strengths they can handle. The present paper is a report of the SHREC'11 robust feature detection and description benchmark results. Comment: This is a full version of the SHREC'11 report published in 3DOR.
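    The benchmark's core measurement can be sketched as follows (our illustration; the exact protocol is defined in the report): detect features on an original and a transformed shape, map the transformed detections back to the original surface, and score the fraction of original feature points recovered within a tolerance.

        # Sketch of feature-detector repeatability under a known transformation.
        import numpy as np

        def repeatability(orig_pts, transformed_pts, inverse_map, tol=0.05):
            """orig_pts: (N, 3) detected feature locations on the original shape;
            transformed_pts: (M, 3) detections on the transformed shape;
            inverse_map: maps transformed points back to original coordinates
            (known here, since the benchmark's transformations are synthetic)."""
            back = inverse_map(transformed_pts)
            # Pairwise distances between original detections and mapped-back ones:
            d = np.linalg.norm(orig_pts[:, None, :] - back[None, :, :], axis=-1)
            return np.mean(d.min(axis=1) < tol)  # hit rate within the tolerance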